Multi-Source Transfer Learning

Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks

Lee, Joshua, Sattigeri, Prasanna, Wornell, Gregory

Neural Information Processing Systems

The advent of deep learning algorithms for mobile devices and sensors has led to a dramatic expansion in the availability and number of systems trained on a wide range of machine learning tasks, creating a host of opportunities and challenges in the realm of transfer learning. Currently, most transfer learning methods require some kind of control over the systems learned, either by enforcing constraints during the source training, or through the use of a joint optimization objective between tasks that requires all data be co-located for training. However, for practical, privacy, or other reasons, in a variety of applications we may have neither control over the individual source-task training nor access to the source training samples. Instead we only have access to features pre-trained on such data as the output of "black-boxes." For such scenarios, we consider the multi-source learning problem of training a classifier using an ensemble of pre-trained neural networks for a set of classes that have not been observed by any of the source networks, and for which we have very few training samples. We show that by using these distributed networks as feature extractors, we can train an effective classifier in a computationally efficient manner using tools from (nonlinear) maximal correlation analysis. In particular, we develop a method we refer to as maximal correlation weighting (MCW) to build the required target classifier from an appropriate weighting of the feature functions from the source networks. We illustrate the effectiveness of the resulting classifier on datasets derived from the CIFAR-100, Stanford Dogs, and Tiny ImageNet datasets, and, in addition, use the methodology to characterize the relative value of different source tasks in learning a target task.
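
As a rough illustration of the MCW idea, the sketch below treats each normalized feature dimension of each pre-trained network as a candidate correlation function, weights it by its empirical correlation with the centered class indicators estimated from the few-shot target samples, and classifies by the correlation-weighted sum of scores across all source networks. This is a simplified sketch under those assumptions, with invented function names, not the authors' exact estimator.

```python
import numpy as np

def mcw_fit(features_per_source, labels, n_classes):
    """Estimate per-feature correlation weights from few-shot target samples.

    features_per_source: list of (n, d_k) arrays, penultimate-layer features
    of the n target samples under each pre-trained source network.
    Returns one (mu, sigma, rho) triple per source, where rho[i, c] is the
    empirical correlation of feature i with the centered indicator of class c.
    """
    G = np.eye(n_classes)[labels]          # one-hot class indicators
    G = G - G.mean(axis=0)                 # centered label functions g_c(y)
    fitted = []
    for F in features_per_source:
        mu = F.mean(axis=0)
        sigma = F.std(axis=0) + 1e-8
        Z = (F - mu) / sigma               # zero-mean, unit-variance f_i(x)
        rho = Z.T @ G / len(labels)        # empirical correlations rho[i, c]
        fitted.append((mu, sigma, rho))
    return fitted

def mcw_predict(features_per_source, fitted):
    """Score classes by the correlation-weighted sum over all source features."""
    scores = 0.0
    for F, (mu, sigma, rho) in zip(features_per_source, fitted):
        scores = scores + ((F - mu) / sigma) @ rho   # sum_i rho[i, c] * f_i(x)
    return np.argmax(scores, axis=1)
```

Because fitting reduces to computing empirical correlations, the cost is linear in the number of samples and feature dimensions, which is what makes this style of classifier practical when each source network is a black-box feature extractor.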


A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning

Neural Information Processing Systems

Current transfer learning algorithm designs mainly focus on the similarities between source and target tasks, while the impact of the sample sizes of these tasks is often not sufficiently addressed. This paper proposes a mathematical framework for quantifying transferability in multi-source transfer learning problems, taking into account both task similarities and the sample complexity of the learning models. In particular, we consider the setup where the models learned from different tasks are linearly combined for learning the target task, and use the optimal combining coefficients to measure the transferability. We then derive an analytical expression for this transferability measure, characterized by the sample sizes, model complexity, and the similarities between source and target tasks, which provides fundamental insight into the knowledge-transfer mechanism and guidance for algorithm design. Furthermore, we apply our analysis to practical learning tasks and establish a quantifiable transferability measure by exploiting a parameterized model. In addition, we develop an alternating iterative algorithm that implements our theoretical results for training deep neural networks in multi-source transfer learning tasks. Finally, experiments on image classification tasks show that our approach outperforms existing transfer learning algorithms in multi-source and few-shot scenarios.
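
As a toy instance of this setup, the snippet below estimates combining coefficients for K source models by least squares on the few available target samples; under this simplified reading, the fitted coefficient magnitudes serve as rough per-source transferability scores. The paper's actual measure additionally accounts for sample sizes and model complexity, which this sketch omits; the function name is illustrative.

```python
import numpy as np

def combining_coefficients(source_preds, y_target):
    """Least-squares estimate of the linear combining coefficients alpha.

    source_preds: (n, K) array; column k holds source model k's prediction
    h_k(x) on the n target samples.  y_target: (n,) target responses.
    The combined predictor is sum_k alpha[k] * h_k(x).
    """
    alpha, *_ = np.linalg.lstsq(source_preds, y_target, rcond=None)
    return alpha

# Usage sketch: with H (n, K) holding source predictions on target samples,
# alpha = combining_coefficients(H, y); new predictions are H_new @ alpha,
# and |alpha[k]| gives a rough ranking of how much source k transfers.
```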



Reviews: Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks

Neural Information Processing Systems

The maximal correlations and correlation functions are then used to predict the class of the target sample. The evaluation is conducted on three datasets (CIFAR-100, Stanford Dogs, and Tiny ImageNet). The proposed MCW method is compared with an SVM trained on the outputs of the penultimate layers. On all three datasets, the multi-source MCW shows a significant advantage, especially when few training samples are available.


Reviews: Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks

Neural Information Processing Systems

The paper presents a very interesting and efficient technique for transfer learning, which is validated on image classification data. The three reviewers agreed on the quality and significance of the contributions. I recommend acceptance as a poster presentation.



An Efficient Evolutionary Deep Learning Framework Based on Multi-source Transfer Learning to Evolve Deep Convolutional Neural Networks

Wang, Bin, Xue, Bing, Zhang, Mengjie

arXiv.org Artificial Intelligence

Convolutional neural networks (CNNs) have steadily achieved better performance over the years by introducing more complex topologies and enlarging their capacity towards deeper and wider architectures. This makes the manual design of CNNs extremely difficult, so the automated design of CNNs has come into the research spotlight and has produced CNNs that outperform manually designed ones. However, computational cost remains the bottleneck of automatically designing CNNs. In this paper, inspired by transfer learning, a new evolutionary-computation-based framework is proposed to efficiently evolve CNNs without compromising classification accuracy. The proposed framework leverages multiple source domains, whose datasets are smaller than the target-domain dataset, to evolve a generalised CNN block only once. A new stacking method is then proposed to both widen and deepen the evolved block, and a grid-search method is proposed to find optimal stacking solutions. The experimental results show that the proposed method acquires good CNNs faster than 15 peer competitors, within less than 40 GPU-hours. Regarding classification accuracy, the proposed method is strongly competitive with its peers, achieving the best error rates of 3.46%, 18.36%, and 1.76% on the CIFAR-10, CIFAR-100, and SVHN datasets, respectively.
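
A minimal sketch of the stacking grid search described above, assuming a hypothetical build_and_eval callback that stacks the evolved block to the given depth and width, trains briefly, and returns validation accuracy. The candidate grids and names here are illustrative, not the paper's actual settings.

```python
import itertools

def grid_search_stacking(build_and_eval, depths=(1, 2, 3), widths=(1, 2, 3)):
    """Try each (depth, width) way of repeating the evolved block and keep
    the configuration with the best validation accuracy.

    build_and_eval: assumed callback (depth, width) -> float accuracy; it is
    expected to stack the block `depth` times in series and `width` times in
    parallel, train the resulting CNN briefly, and report accuracy.
    """
    best_cfg, best_acc = None, -1.0
    for depth, width in itertools.product(depths, widths):
        acc = build_and_eval(depth, width)
        if acc > best_acc:
            best_cfg, best_acc = (depth, width), acc
    return best_cfg, best_acc
```

The appeal of this design is that the expensive evolutionary search runs only once, on the smaller source domains; adapting to the target domain then reduces to this cheap exhaustive sweep over stacking configurations.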

